LiDAR point clouds, usually captured by continuously rotating LiDAR sensors, record the precise geometry of the surrounding environment and are crucial to many autonomous detection and navigation tasks. Although many 3D deep architectures have been developed, the efficient collection and annotation of large amounts of point clouds remains a major challenge in point cloud analysis and understanding. This paper presents PolarMix, a point cloud augmentation technique that is simple and generic yet effectively mitigates the data constraint across different perception tasks and scenarios. PolarMix enriches point cloud distributions while preserving point cloud fidelity via two cross-scan augmentation strategies that cut, edit, and mix point clouds along the scanning direction. The first is scene-level swapping, which exchanges point cloud sectors of two LiDAR scans cut along the azimuth axis. The second is instance-level rotation and pasting, which crops point instances from one LiDAR scan, rotates them by multiple angles (to create multiple copies), and pastes the rotated instances into other scans. Extensive experiments show that PolarMix achieves superior performance consistently across different perception tasks and scenarios. In addition, it works as a plug-and-play module for various 3D deep architectures and also performs well for unsupervised domain adaptation.
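For intuition, here is a minimal NumPy sketch of the two augmentations. It is not the authors' released implementation; the (N, 4) point layout, the chosen azimuth sector, and the rotation angles are illustrative assumptions.

```python
import numpy as np

def scene_level_swap(scan_a, labels_a, scan_b, labels_b, sector=(0.0, np.pi)):
    """Replace the points of scan_a inside an azimuth sector with the
    corresponding sector from scan_b.

    scan_*: (N, 4) arrays of x, y, z, intensity; labels_*: (N,) arrays.
    The sector is given as (start, end) azimuth angles in radians.
    """
    def in_sector(scan):
        azimuth = np.arctan2(scan[:, 1], scan[:, 0])  # angle in [-pi, pi]
        return (azimuth >= sector[0]) & (azimuth < sector[1])

    mask_a, mask_b = in_sector(scan_a), in_sector(scan_b)
    new_a = np.concatenate([scan_a[~mask_a], scan_b[mask_b]], axis=0)
    new_labels_a = np.concatenate([labels_a[~mask_a], labels_b[mask_b]], axis=0)
    return new_a, new_labels_a

def instance_rotate_paste(src_scan, src_labels, dst_scan, dst_labels,
                          instance_classes, angles=(np.pi / 2, np.pi, 3 * np.pi / 2)):
    """Crop points of selected classes from src_scan, rotate copies around the
    z-axis by each angle, and paste them into dst_scan."""
    inst_mask = np.isin(src_labels, instance_classes)
    inst_points, inst_labels = src_scan[inst_mask], src_labels[inst_mask]

    pasted_points, pasted_labels = [dst_scan], [dst_labels]
    for theta in angles:
        rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                        [np.sin(theta),  np.cos(theta), 0.0],
                        [0.0,            0.0,           1.0]])
        copy = inst_points.copy()
        copy[:, :3] = copy[:, :3] @ rot.T   # rotate x, y, z; keep intensity
        pasted_points.append(copy)
        pasted_labels.append(inst_labels)
    return np.concatenate(pasted_points, axis=0), np.concatenate(pasted_labels, axis=0)
```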
Unsupervised domain adaptation aims to align a labeled source domain and an unlabeled target domain, but it requires access to the source data, which often raises concerns about data privacy, data portability, and data transmission efficiency. We study unsupervised model adaptation (UMA), also known as unsupervised domain adaptation without source data, an alternative setting that aims to adapt source-trained models toward target distributions without accessing the source data. To this end, we design an innovative historical contrastive learning (HCL) technique that exploits historical source hypotheses to compensate for the absence of source data in UMA. HCL addresses the UMA challenge from two perspectives. First, it introduces historical contrastive instance discrimination (HCID), which learns from target samples by contrasting their embeddings generated by the currently adapted model and the historical models. With the historical models, HCID encourages UMA to learn instance-discriminative target representations while preserving the source hypothesis. Second, it introduces historical contrastive category discrimination (HCCD), which pseudo-labels target samples to learn category-discriminative target representations. Specifically, HCCD re-weights pseudo labels according to their prediction consistency across the current and historical models. Extensive experiments show that HCL outperforms state-of-the-art methods consistently across a variety of visual tasks and setups.
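The HCCD re-weighting idea can be sketched in a few lines. The weighting function below (the probability the historical model assigns to the current model's pseudo label) is one plausible choice of ours, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def hccd_weighted_loss(logits_current, logits_history, temperature=1.0):
    """Weight pseudo-label cross-entropy by agreement between the current and a
    historical model.  Both inputs are (B, C) unnormalized logits on the same
    unlabeled target batch; higher agreement gives a larger weight.
    """
    prob_cur = F.softmax(logits_current / temperature, dim=1)
    prob_his = F.softmax(logits_history / temperature, dim=1)

    pseudo_labels = prob_cur.argmax(dim=1)                     # hard pseudo labels
    # Consistency weight: probability the historical model assigns to the
    # pseudo label predicted by the current model (one simple choice).
    weight = prob_his.gather(1, pseudo_labels.unsqueeze(1)).squeeze(1)

    ce = F.cross_entropy(logits_current, pseudo_labels, reduction="none")
    return (weight.detach() * ce).mean()
```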
Transfer learning from synthetic to real data has been widely studied to mitigate data annotation constraints in various computer vision tasks such as semantic segmentation. However, this research has focused on 2D images, and its counterpart for 3D point cloud segmentation lags far behind due to the lack of large-scale synthetic datasets and effective transfer methods. We address this issue by collecting SynLiDAR, a large-scale synthetic LiDAR dataset that contains point-wise annotated point clouds with accurate geometric shapes and comprehensive semantic classes. SynLiDAR is collected from multiple virtual environments with rich scenes and layouts, and it consists of over 19 billion points across 32 semantic classes. In addition, we design PCT, a novel point cloud translator that effectively mitigates the gap between synthetic and real point clouds. Specifically, we decompose the synthetic-to-real gap into an appearance component and a sparsity component and handle them separately, which greatly improves point cloud translation. We conducted extensive experiments over three transfer learning setups, including data augmentation, semi-supervised domain adaptation, and unsupervised domain adaptation. The experiments show that SynLiDAR provides a high-quality data source for studying 3D transfer, and the proposed PCT achieves superior point cloud translation consistently across the three setups. SynLiDAR project page: \url{https://github.com/xiaoaoran/synlidar}
Generative adversarial networks (GANs) have achieved great success in image translation and manipulation. However, high-fidelity image generation with faithful style control remains a grand challenge in computer vision. This paper presents a versatile image translation and manipulation framework that achieves accurate semantic and style guidance in image generation by explicitly building correspondences. To handle the quadratic complexity incurred by building dense correspondences, we introduce a bi-level feature alignment strategy that adopts a top-$k$ operation to rank block-wise features, followed by dense attention between block features, which substantially reduces memory cost. As the top-$k$ operation involves index swapping, which precludes gradient propagation, we approximate the non-differentiable top-$k$ operation with a regularized earth mover's problem so that its gradient can be effectively back-propagated. In addition, we design a novel semantic position encoding mechanism that builds a coordinate system for each individual semantic region to preserve texture structures while building correspondences. Furthermore, we design a novel confidence feature injection module that mitigates the mismatch problem by fusing features adaptively according to the reliability of the built correspondences. Extensive experiments show that our method achieves superior performance both qualitatively and quantitatively compared with the state-of-the-art.
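The differentiable top-$k$ relaxation can be illustrated with a Sinkhorn-style approximation of a regularized earth mover's problem. The sketch below conveys the general idea only; the two-bin construction, score normalization, and regularization strength are assumptions rather than the paper's exact formulation.

```python
import torch

def soft_topk(scores, k, eps=0.1, n_iter=50):
    """Differentiable relaxation of top-k selection via entropy-regularized
    optimal transport (a Sinkhorn approximation of the earth mover's problem).

    scores: (n,) tensor.  Returns soft selection weights in [0, 1] summing to ~k.
    """
    n = scores.shape[0]
    s = (scores - scores.min()) / (scores.max() - scores.min() + 1e-8)  # normalize to [0, 1]
    anchors = torch.tensor([0.0, 1.0], device=scores.device)            # "dropped" / "kept" bins
    cost = (s.unsqueeze(1) - anchors.unsqueeze(0)) ** 2                 # (n, 2) transport cost

    mu = torch.full((n,), 1.0 / n, device=scores.device)                # source marginal
    nu = torch.tensor([(n - k) / n, k / n], device=scores.device)       # target marginal

    kernel = torch.exp(-cost / eps)                                     # Gibbs kernel
    u = torch.ones_like(mu)
    for _ in range(n_iter):                                             # unrolled Sinkhorn iterations
        v = nu / (kernel.t() @ u + 1e-8)
        u = mu / (kernel @ v + 1e-8)
    plan = u.unsqueeze(1) * kernel * v.unsqueeze(0)                     # transport plan (n, 2)
    return plan[:, 1] * n                                               # soft membership in the top-k bin
```

Because the Sinkhorn iterations are plain tensor operations, gradients flow through the soft selection weights, which is the point of replacing hard top-$k$ index selection.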
Weakly-supervised object localization aims to indicate the category as well as the scope of an object in an image given only the image-level labels. Most of the existing works are based on Class Activation Mapping (CAM) and endeavor to enlarge the discriminative area inside the activation map to perceive the whole object, yet ignore the co-occurrence confounder of the object and context (e.g., fish and water), which makes it hard for the model to distinguish object boundaries. Besides, the use of CAM also brings a dilemma problem that the classification and localization always suffer from a performance gap and can not reach their highest accuracy simultaneously. In this paper, we propose a causal knowledge distillation method, dubbed KD-CI-CAM, to address these two under-explored issues in one go. More specifically, we tackle the co-occurrence context confounder problem via causal intervention (CI), which explores the causalities among image features, contexts, and categories to eliminate the biased object-context entanglement in the class activation maps. Based on the de-biased object feature, we additionally propose a multi-teacher causal distillation framework to balance the absorption of classification knowledge and localization knowledge during model training. Extensive experiments on several benchmarks demonstrate the effectiveness of KD-CI-CAM in learning clear object boundaries from confounding contexts and addressing the dilemma problem between classification and localization performance.
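A hedged sketch of how two teachers might be blended during distillation is given below; the specific mixing of soft targets is our own simplification and not necessarily KD-CI-CAM's causal distillation losses.

```python
import torch.nn.functional as F

def multi_teacher_distill_loss(student_logits, cls_teacher_logits, loc_teacher_logits,
                               alpha=0.5, temperature=2.0):
    """Blend soft targets from a classification-oriented teacher and a
    localization-oriented teacher and distill them into one student.
    All inputs are (B, C) logits; alpha trades off the two teachers.
    """
    t = temperature
    student_logp = F.log_softmax(student_logits / t, dim=1)
    cls_target = F.softmax(cls_teacher_logits / t, dim=1)
    loc_target = F.softmax(loc_teacher_logits / t, dim=1)
    target = alpha * cls_target + (1.0 - alpha) * loc_target
    # KL divergence between student and the blended teacher distribution,
    # scaled by t^2 as is customary in knowledge distillation.
    return F.kl_div(student_logp, target, reduction="batchmean") * (t * t)
```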
An increasing number of public datasets have shown a marked clinical impact on assessing anatomical structures. However, each of the datasets is small, partially labeled, and rarely investigates severe tumor subjects. Moreover, current models are limited to segmenting specific organs/tumors, which can not be extended to novel domains and classes. To tackle these limitations, we introduce embedding learned from Contrastive Language-Image Pre-training (CLIP) to segmentation models, dubbed the CLIP-Driven Universal Model. The Universal Model can better segment 25 organs and 6 types of tumors by exploiting the semantic relationship between abdominal structures. The model is developed from an assembly of 14 datasets with 3,410 CT scans and evaluated on 6,162 external CT scans from 3 datasets. We rank first on the public leaderboard of the Medical Segmentation Decathlon (MSD) and achieve the state-of-the-art results on Beyond The Cranial Vault (BTCV). Compared with dataset-specific models, the Universal Model is computationally more efficient (6x faster), generalizes better to CT scans from varying sites, and shows stronger transfer learning performance on novel tasks. The design of CLIP embedding enables the Universal Model to be easily extended to new classes without catastrophically forgetting the previously learned classes.
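One plausible way to let text embeddings drive a segmentation head is to have them generate per-class classifier weights, sketched below. The layer sizes are illustrative, the CLIP text embeddings are assumed to be precomputed from class prompts, and a 2D feature map stands in for the 3D CT features used in practice; this is a simplification rather than the Universal Model's exact architecture.

```python
import torch
import torch.nn as nn

class TextDrivenSegHead(nn.Module):
    """Toy segmentation head conditioned on per-class text embeddings.

    For each class, a linear controller maps the (frozen) CLIP text embedding
    of the class prompt to the weights of a 1x1 convolution, which is applied
    to the image features to produce a binary mask logit for that class.
    """
    def __init__(self, text_dim=512, feat_dim=48):
        super().__init__()
        self.feat_dim = feat_dim
        self.controller = nn.Linear(text_dim, feat_dim + 1)   # kernel weights + bias per class

    def forward(self, features, text_embeddings):
        # features: (B, feat_dim, H, W); text_embeddings: (num_classes, text_dim)
        params = self.controller(text_embeddings)              # (num_classes, feat_dim + 1)
        weight = params[:, :self.feat_dim]                     # (num_classes, feat_dim)
        bias = params[:, self.feat_dim]                        # (num_classes,)
        # 1x1 convolution implemented as a matrix product over the channel dim.
        logits = torch.einsum("bchw,kc->bkhw", features, weight) + bias.view(1, -1, 1, 1)
        return logits                                          # (B, num_classes, H, W)
```

Because the class set is defined by text prompts rather than a fixed output layer, adding a new class amounts to encoding a new prompt, which matches the abstract's claim about extending to new classes without forgetting.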
In this work, we tackle two vital tasks in automated driving systems, i.e., driver intent prediction and risk object identification from egocentric images. Mainly, we investigate the question: what would be good road scene-level representations for these two tasks? We contend that a scene-level representation must capture higher-level semantic and geometric representations of traffic scenes around ego-vehicle while performing actions to their destinations. To this end, we introduce the representation of semantic regions, which are areas where ego-vehicles visit while taking an afforded action (e.g., left-turn at 4-way intersections). We propose to learn scene-level representations via a novel semantic region prediction task and an automatic semantic region labeling algorithm. Extensive evaluations are conducted on the HDD and nuScenes datasets, and the learned representations lead to state-of-the-art performance for driver intention prediction and risk object identification.
New architecture GPUs like A100 are now equipped with multi-instance GPU (MIG) technology, which allows the GPU to be partitioned into multiple small, isolated instances. This technology provides more flexibility for users to support both deep learning training and inference workloads, but efficiently utilizing it can still be challenging. The vision of this paper is to provide a more comprehensive and practical benchmark study for MIG in order to eliminate the need for tedious manual benchmarking and tuning efforts. To achieve this vision, the paper presents MIGPerf, an open-source tool that streamlines the benchmark study for MIG. Using MIGPerf, the authors conduct a series of experiments, including deep learning training and inference characterization on MIG, GPU sharing characterization, and framework compatibility with MIG. The results of these experiments provide new insights and guidance for users to effectively employ MIG, and lay the foundation for further research on the orchestration of hybrid training and inference workloads on MIGs. The code and results are released on https://github.com/MLSysOps/MIGProfiler. This work is still in progress and more results will be published soon.
There are multiple scales of abstraction from which we can describe the same image, depending on whether we are focusing on fine-grained details or a more global attribute of the image. In brain mapping, learning to automatically parse images to build representations of both small-scale features (e.g., the presence of cells or blood vessels) and global properties of an image (e.g., which brain region the image comes from) is a crucial and open challenge. However, most existing datasets and benchmarks for neuroanatomy consider only a single downstream task at a time. To bridge this gap, we introduce a new dataset, annotations, and multiple downstream tasks that provide diverse ways to readout information about brain structure and architecture from the same image. Our multi-task neuroimaging benchmark (MTNeuro) is built on volumetric, micrometer-resolution X-ray microtomography images spanning a large thalamocortical section of mouse brain, encompassing multiple cortical and subcortical regions. We generated a number of different prediction challenges and evaluated several supervised and self-supervised models for brain-region prediction and pixel-level semantic segmentation of microstructures. Our experiments not only highlight the rich heterogeneity of this dataset, but also provide insights into how self-supervised approaches can be used to learn representations that capture multiple attributes of a single image and perform well on a variety of downstream tasks. Datasets, code, and pre-trained baseline models are provided at: https://mtneuro.github.io/ .
Designing better deep networks and better reinforcement learning (RL) algorithms are both important for deep RL. This work focuses on the former. Previous methods build the network with several modules like CNN, LSTM and Attention. Recent methods combine the Transformer with these modules for better performance. However, it requires tedious optimization skills to train a network composed of mixed modules, making these methods inconvenient to use in practice. In this paper, we propose to design \emph{pure Transformer-based networks} for deep RL, aiming at providing off-the-shelf backbones for both the online and offline settings. Specifically, the Transformer in Transformer (TIT) backbone is proposed, which cascades two Transformers in a very natural way: the inner one is used to process a single observation, while the outer one is responsible for processing the observation history; combining both is expected to extract spatial-temporal representations for good decision-making. Experiments show that TIT can achieve satisfactory performance in different settings, consistently.
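A rough PyTorch sketch of the Transformer-in-Transformer idea, not the authors' exact architecture: the inner encoder processes the tokens of a single observation, and the outer encoder processes the resulting history of observation embeddings. Dimensions, pooling, and positional embeddings are our own illustrative choices.

```python
import torch
import torch.nn as nn

class TITBackbone(nn.Module):
    """Minimal Transformer-in-Transformer style backbone: an inner Transformer
    encodes each observation's tokens, and an outer Transformer models the
    history of per-observation embeddings."""
    def __init__(self, obs_token_dim, d_model=128, n_heads=4, n_layers=2, history_len=8):
        super().__init__()
        self.token_proj = nn.Linear(obs_token_dim, d_model)
        inner_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        outer_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.inner = nn.TransformerEncoder(inner_layer, n_layers)
        self.outer = nn.TransformerEncoder(outer_layer, n_layers)
        self.pos_emb = nn.Parameter(torch.zeros(1, history_len, d_model))

    def forward(self, obs_history):
        # obs_history: (B, T, num_tokens, obs_token_dim), e.g. patch tokens per frame
        b, t, n, _ = obs_history.shape
        tokens = self.token_proj(obs_history).reshape(b * t, n, -1)
        per_obs = self.inner(tokens).mean(dim=1)               # (B*T, d_model): one embedding per observation
        history = per_obs.reshape(b, t, -1) + self.pos_emb[:, :t]
        return self.outer(history)[:, -1]                      # (B, d_model): representation of the latest step
```

The returned vector would typically feed a policy or value head; stacking the two encoders keeps the whole backbone pure-Transformer, which is the design point of the abstract.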